
    Issues of using wireless sensor network to monitor urban air quality

    Frequent monitoring of the urban environment is now regulated in most EU countries. Due to the design and cost of high-quality sensors, the current approach using these sensors may not provide data with an appropriate spatial and temporal resolution. As a result, wireless sensor networks built from large numbers of low-cost sensors are becoming increasingly popular for monitoring urban environments. In practice, however, many issues prevent such networks from being widely adopted. In this paper, we use data and lessons learnt from three real deployments to illustrate those issues. The issues are classified into three main categories and discussed according to the different sensing stages. Finally, we summarise a list of open challenges that we believe are significant for future research.

    Analysis and Optimization of Message Acceptance Filter Configurations for Controller Area Network (CAN)

    Many of the processors used in automotive Electronic Control Units (ECUs) are resource constrained due to the cost pressures of volume production; they have relatively low clock speeds and limited memory. Controller Area Network (CAN) is used to connect the various ECUs; however, the broadcast nature of CAN means that every message transmitted on the network can potentially cause additional processing load on the receiving nodes, whether or not the message is relevant to that ECU. Hardware filters can reduce or even eliminate this unnecessary load by filtering out messages that are not needed by the ECU. Filtering is done on the message IDs, which primarily identify the contents of the message and its priority. In this paper, we consider the problem of selecting filter configurations to minimize the load due to undesired messages. We show that the general problem is NP-complete. We therefore propose and evaluate an approach based on Simulated Annealing. We show that this approach finds near-optimal filter configurations for the interesting case where there are more desired messages than available filters.
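
    The abstract does not give the paper's encoding, but a minimal sketch of the idea is below, assuming standard 11-bit CAN identifiers and conventional code/mask acceptance filters (a message passes a filter if its ID agrees with the filter code on every bit set in the mask). The state representation, move set, cooling schedule, and cost function are illustrative choices, not the paper's.

```python
import math
import random

ID_BITS = 11  # standard-format CAN identifiers

def group_filter(ids):
    """Derive a (code, mask) pair accepting every ID in the group: mask
    bits are set only where all IDs in the group agree, so every desired
    ID in the group passes by construction."""
    mask = (1 << ID_BITS) - 1
    for i in ids:
        mask &= ~(i ^ ids[0])
    return ids[0] & mask, mask

def leakage(groups, undesired):
    """Cost: number of undesired message IDs that pass some filter."""
    filters = [group_filter(g) for g in groups if g]
    return sum(any((u & m) == c for c, m in filters) for u in undesired)

def anneal(desired, undesired, n_filters, steps=20000, temp=5.0, cool=0.9995):
    # initial state: desired IDs dealt round-robin across the filters
    groups = [list(desired[i::n_filters]) for i in range(n_filters)]
    cost = leakage(groups, undesired)
    for _ in range(steps):
        src, dst = random.sample(range(n_filters), 2)
        if not groups[src]:
            continue
        msg = random.choice(groups[src])          # move one ID between filters
        groups[src].remove(msg)
        groups[dst].append(msg)
        new = leakage(groups, undesired)
        if new <= cost or random.random() < math.exp((cost - new) / temp):
            cost = new                            # accept the move
        else:
            groups[dst].remove(msg)               # revert the move
            groups[src].append(msg)
        temp *= cool
    return [group_filter(g) for g in groups if g], cost
```

    Because each filter's mask is derived from the IDs assigned to it, every desired message is accepted in any state the annealer visits; only the undesired leakage varies, mirroring the optimisation objective described above.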

    An Enhanced Bailout Protocol for Mixed Criticality Embedded Software

    Moving mixed criticality research into industrial practice requires models whose run-time behaviour is acceptable to systems engineers. Certain aspects of current models, such as abandoning lower criticality tasks when certain situations arise, do not give the robustness required in application domains such as the automotive and aerospace industries. In this paper a new bailout protocol is developed that still guarantees high criticality software but minimises the negative impact on lower criticality software via a timely return to normal operation. We show how the bailout protocol can be integrated with existing techniques, utilising both offline slack and online gain-time to further improve performance. Static analysis is provided for schedulability guarantees, while scenario-based evaluation via simulation is used to explore the effectiveness of the protocol.
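
    The protocol's precise rules are in the paper; as a rough illustration of the mechanism named in the abstract, the sketch below models a "bailout fund" that is opened when a budget overrun occurs, repaid from offline slack and online gain-time, and closed with a timely return to normal operation. The state names and transitions are a simplified reading of the abstract, not the published protocol.

```python
from enum import Enum

class Mode(Enum):
    NORMAL = 0
    BAILOUT = 1
    RECOVERY = 2

class BailoutFund:
    """Illustrative accounting for a bailout-style protocol."""

    def __init__(self, offline_slack=0.0):
        self.mode = Mode.NORMAL
        self.debt = 0.0
        self.offline_slack = offline_slack  # slack known from static analysis

    def on_overrun(self, extra_budget):
        # A job exceeded its normal-mode budget: borrow the extra time,
        # first drawing on any offline slack set aside for this purpose.
        borrow = max(0.0, extra_budget - self.offline_slack)
        if borrow > 0.0:
            self.debt += borrow
            self.mode = Mode.BAILOUT

    def on_gain_time(self, gain):
        # A job finished early: its unused budget repays the fund.
        if self.mode is Mode.BAILOUT:
            self.debt = max(0.0, self.debt - gain)
            if self.debt == 0.0:
                # debt repaid; drain in-flight work before normal operation
                self.mode = Mode.RECOVERY

    def on_idle_instant(self):
        # An idle instant confirms no backlog remains: resume normal mode.
        if self.mode is Mode.RECOVERY:
            self.mode = Mode.NORMAL
```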

    An Improved Sensor Calibration with Anomaly Detection and Removal

    Sensor calibration is a widely adopted process for improving the data quality of low-cost sensors. However, such a process may not address data issues caused by anomalies: data errors that are inconsistent with the actual physical phenomena. This paper presents an improved sensor calibration approach, which detects and removes anomalies before the calibration process. A Bayesian method is used for anomaly detection that takes advantage of cross-sensitive parameters in a sensor array. The method utilises dependencies between cross-sensitive parameters, which allows the underlying physical phenomena to be modelled and anomalies to be detected. The calibration approach is based on stepwise regression, which automatically and systematically selects suitable supporting parameters for the calibration function. The evaluation of anomaly detection shows that the results are better than those of state-of-the-art methods in terms of accuracy, precision and completeness. The overall evaluation confirms that data quality can be further enhanced when anomalies are removed before calibration.
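
    As a rough sketch of the pipeline shape, the following illustrates anomaly removal followed by forward stepwise selection of supporting parameters. The z-score filter is a deliberately simple stand-in for the paper's Bayesian cross-sensitivity model, and the selection criterion (training MSE) is an assumption; the point is the ordering: clean first, then calibrate.

```python
import numpy as np

def remove_anomalies(X, y, z_thresh=3.0):
    """Placeholder anomaly filter: drops samples whose target deviates
    more than z_thresh standard deviations from the mean. The paper uses
    a Bayesian model over cross-sensitive parameters instead; this
    stand-in only marks where anomaly removal sits in the pipeline."""
    z = np.abs((y - y.mean()) / y.std())
    keep = z < z_thresh
    return X[keep], y[keep]

def forward_stepwise(X, y, max_params=None):
    """Greedy forward selection: repeatedly add the supporting parameter
    (column of X) that most reduces the residual error of a
    least-squares fit, stopping when no column improves it."""
    n, p = X.shape
    selected, remaining = [], list(range(p))
    best_err = np.inf
    while remaining and (max_params is None or len(selected) < max_params):
        errs = []
        for j in remaining:
            cols = selected + [j]
            A = np.column_stack([X[:, cols], np.ones(n)])  # with intercept
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            errs.append((np.mean((A @ coef - y) ** 2), j))
        err, j = min(errs)
        if err >= best_err:  # no improvement: stop adding parameters
            break
        best_err = err
        selected.append(j)
        remaining.remove(j)
    return selected

# usage: X holds raw sensor and cross-sensitive channels, y the reference
# X_clean, y_clean = remove_anomalies(X, y)
# supporting = forward_stepwise(X_clean, y_clean)
```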

    Generating Utilization Vectors for the Systematic Evaluation of Schedulability Tests

    This paper introduces the Dirichlet-Rescale (DRS) algorithm. The DRS algorithm provides an efficient general-purpose method of generating n-dimensional vectors of components (e.g. task utilizations), where the components sum to a specified total, each component conforms to individual constraints on the maximum and minimum values that it can take, and the vectors are uniformly distributed over the valid region of the domain of all possible vectors, bounded by the constraints. The DRS algorithm can be used to improve the nuance and quality of empirical studies into the effectiveness of schedulability tests for real-time systems, potentially making them more realistic and leading to new conclusions. It is efficient enough for use in large-scale studies where millions of task sets need to be generated. Further, the constraints on individual task utilizations can be used for fine-grained control of task set parameters, enabling more detailed exploration of schedulability test behavior. Finally, the real power of the algorithm lies in the fact that it can be applied recursively, with one vector acting as a set of constraints for the next. This is particularly useful in task set generation for mixed criticality systems and multi-core systems, where task utilizations are either multi-valued or can be decomposed into multiple constituent parts.
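
    A naive baseline makes the problem statement concrete: a flat Dirichlet draw scaled to the target total is uniform over the simplex, and rejecting out-of-bounds vectors preserves uniformity over the valid region, but the acceptance rate collapses as the constraints tighten, which is exactly what an efficient algorithm such as DRS avoids. The sketch below is that baseline, not the DRS algorithm itself.

```python
import numpy as np

def rejection_sample(n, total, lower, upper, rng=None):
    """Draw one n-component vector summing to `total`, uniformly
    distributed over the region where lower[i] <= u[i] <= upper[i].
    Correct but exponentially slow for tight constraints."""
    if rng is None:
        rng = np.random.default_rng()
    while True:
        u = rng.dirichlet(np.ones(n)) * total  # uniform on the scaled simplex
        if np.all(u >= lower) and np.all(u <= upper):
            return u

# e.g. 5 task utilizations summing to 0.9, each within [0.05, 0.5]
# u = rejection_sample(5, 0.9, np.full(5, 0.05), np.full(5, 0.5))
```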

    Robust Mixed-Criticality Systems

    Certification authorities require correctness and survivability. In the temporal domain this requires a convincing argument that all deadlines will be met under error-free conditions, and that when certain defined errors occur the behaviour of the system is still predictable and safe. This means that occasional execution-time overruns should be tolerated, and that where more severe errors occur levels of graceful degradation should be supported. With mixed-criticality systems, fault tolerance must be criticality aware, i.e. some tasks should degrade less than others. In this paper a quantitative notion of robustness is defined, and it is shown how fixed-priority task scheduling can be structured to maximise the likelihood of a system remaining fail operational or fail robust (the latter implying that an occasional job may be skipped provided all other deadlines are met). Analysis is developed for fail-operational and fail-robust behaviour, optimal priority ordering is addressed, and an experimental evaluation is described. Overall, the approach presented allows robustness to be balanced against schedulability, enabling a designer to explore the design space so defined.
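
    The paper's analysis is scheme-specific, but a coarse sketch conveys how a quantitative robustness measure can sit on top of standard fixed-priority response-time analysis: count how many same-sized execution-time overruns a task set absorbs before any deadline is missed. The overrun model and metric below are illustrative assumptions, not the paper's definitions.

```python
import math

def response_time(i, tasks, extra=0.0):
    """Fixed-priority response-time analysis (tasks sorted by priority,
    index 0 highest), with `extra` execution time injected into task i's
    busy period to model error-induced overruns. Returns None on a miss."""
    C, T, D = tasks[i]
    R = C + extra
    while True:
        interference = sum(math.ceil(R / Tj) * Cj for Cj, Tj, _ in tasks[:i])
        R_new = C + extra + interference
        if R_new > D:
            return None
        if R_new == R:
            return R
        R = R_new

def robustness(tasks, overrun):
    """Largest number of overruns of size `overrun` (each lengthening
    every task's busy period) tolerated with all deadlines still met.
    Returns -1 if the task set is unschedulable even without errors."""
    k = 0
    while all(response_time(i, tasks, extra=k * overrun) is not None
              for i in range(len(tasks))):
        k += 1
    return k - 1

# tasks as (C, T, D) in deadline-monotonic priority order
# print(robustness([(1, 5, 5), (2, 10, 10), (3, 20, 20)], overrun=0.5))
```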

    Mixed Criticality on Multi-cores Accounting for Resource Stress and Resource Sensitivity

    The most significant trend in real-time systems design in recent years has been the adoption of multi-core processors and the accompanying integration of functionality with different criticality levels onto the same hardware platform. This paper integrates mixed criticality aspects and assurances within a multi-core system model. It bounds cross-core contention and interference by considering the impact on task execution times due to the stress on shared hardware resources caused by co-runners, and each task’s sensitivity to that resource stress. Schedulability analysis is derived for four mixed criticality scheduling schemes based on partitioned fixed-priority preemptive scheduling. Each scheme provides robust timing guarantees for high criticality tasks, ensuring that their timing constraints cannot be jeopardized by the behavior or misbehavior of low criticality tasks.
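
    As a toy reading of the stress/sensitivity idea, the sketch below inflates a task's execution time by its sensitivity to the aggregate stress its co-runners place on each shared hardware resource (bus, cache, DRAM). The multiplicative form and scalar per-resource stress values are assumptions for illustration; the paper derives its bounds per scheduling scheme.

```python
def inflated_wcet(task, corunners):
    """Inflate a task's execution time by (sensitivity x co-runner
    stress), summed over the shared resources it is sensitive to."""
    stress = {}
    for co in corunners:
        for resource, s in co["stress"].items():
            stress[resource] = stress.get(resource, 0.0) + s
    factor = 1.0 + sum(task["sensitivity"].get(r, 0.0) * s
                       for r, s in stress.items())
    return task["wcet"] * factor

# e.g. a task sensitive to bus and DRAM contention, with two co-runners
task = {"wcet": 2.0, "sensitivity": {"bus": 0.10, "dram": 0.05}}
corunners = [{"stress": {"bus": 0.8}}, {"stress": {"bus": 0.5, "dram": 1.0}}]
# inflated_wcet(task, corunners) -> 2.0 * (1 + 0.10*1.3 + 0.05*1.0) = 2.36
```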

    DAG Scheduling and Analysis on Multi-core Systems by Modelling Parallelism and Dependency


    Compensating Adaptive Mixed Criticality Scheduling

    The majority of prior academic research into mixed criticality systems assumes that if high-criticality tasks continue to execute beyond the execution time limits at which they would normally finish, then further workload due to low-criticality tasks may be dropped in order to ensure that the high-criticality tasks can still meet their deadlines. Industry, however, takes a different view of the importance of low-criticality tasks, with many practical systems unable to tolerate the abandonment of such tasks. In this paper, we address the challenge of supporting genuinely graceful degradation in mixed criticality systems, thus avoiding the abandonment problem. We explore the Compensating Adaptive Mixed Criticality (C-AMC) scheduling scheme. C-AMC ensures that both high- and low-criticality tasks meet their deadlines in both normal and degraded modes. Under C-AMC, jobs of low-criticality tasks released in degraded mode execute imprecise versions that provide essential functionality and outputs of sufficient quality, while also reducing the overall workload. This compensates, at least in part, for the overload due to the abnormal behavior of high-criticality tasks. C-AMC is based on fixed-priority preemptive scheduling and hence provides a viable migration path along which industry can make an evolutionary transition from current practice.
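
    A coarse sketch of the scheme's schedulability shape, under assumed task attributes (C_lo, C_hi, and an imprecise budget C_imp): check fixed-priority response times in the normal mode with full budgets, and in the degraded mode where high-criticality tasks use their larger budgets while low-criticality tasks run imprecise versions instead of being dropped. A faithful AMC-style analysis also bounds the mode change itself; this sketch checks only the two steady states.

```python
import math

def rta(i, budgets, tasks):
    """Response time of task i (index 0 = highest priority) with
    per-task execution budgets chosen by `budgets`; None on a miss."""
    C, D = budgets(tasks[i]), tasks[i]["D"]
    R = C
    while True:
        R_new = C + sum(math.ceil(R / t["T"]) * budgets(t) for t in tasks[:i])
        if R_new > D:
            return None
        if R_new == R:
            return R
        R = R_new

def camc_schedulable(tasks):
    """Every task must meet its deadline in normal mode (full budgets)
    and in degraded mode (HI tasks at C_hi, LO tasks at C_imp)."""
    normal = lambda t: t["C_lo"]
    degraded = lambda t: t["C_hi"] if t["crit"] == "HI" else t["C_imp"]
    return all(rta(i, normal, tasks) is not None and
               rta(i, degraded, tasks) is not None
               for i in range(len(tasks)))

# tasks sorted by priority; imprecise budgets shrink LO work when degraded
tasks = [
    {"crit": "HI", "C_lo": 1, "C_hi": 2, "C_imp": 1, "T": 5,  "D": 5},
    {"crit": "LO", "C_lo": 2, "C_hi": 2, "C_imp": 1, "T": 10, "D": 10},
    {"crit": "HI", "C_lo": 2, "C_hi": 4, "C_imp": 2, "T": 20, "D": 20},
]
# print(camc_schedulable(tasks))  -> True for this example
```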